
Mirror, mirror on the wall... Am I a critical thinker after all?

About this video

This session was presented at NDC Porto.

How many times have you looked back at a project only to find a mix of technologies, architectural styles, patterns, and practices that somehow just don't fit together, only to conclude: This is unmaintainable… Let's rewrite the whole thing! As tech professionals, we usually pride ourselves on our ability to think critically because, as part of our work, we continuously tackle complex problems. But are we really critical thinkers? If so, which decisions led to this point? Were they made consciously? What decision framework was used to structure our thoughts and avoid biases?

Over the last years of my career, I have applied practices that have significantly improved my decision-making and how I evaluate and challenge decisions made by others. In this session, I'll share the essential building blocks that improve critical thinking, help break through biases, and sharpen our intent and communication when making impactful decisions. You'll learn to apply tactics that help identify assumptions, evaluate options objectively, and assess risk. You'll walk out of this session with actionable takeaways that will help strengthen your decisions in our complex and ever-changing technology landscape.

Transcription

00:05 Laila Bougria
Hello, everyone. Welcome. Oh my God, that's pretty loud. I have to be mindful of not being my usual loud self maybe. How are you finding the conference? Good? Yeah. Cool. Well, thank you for being here at the last slot of the first day. That's what I call perseverance.
00:26 Laila Bougria
My name is Laila Bougria. I'm a software engineer and solution architect. I work for a company called Particular Software, where we build NServiceBus and messaging middleware technology and a whole platform around that. Now, I've been a software engineer for almost two decades, although I don't always like to admit that it's been so long, and throughout that time, I've had the chance to work on a lot of different projects. Now, some of them were really memorable, really great. Those were very cool systems. They're still running in production today, and they're still being actively maintained. And although there's always something that you would want to change if you had the ability to go back, all in all, I'm proud of the result and I'm really happy with what we were able to deliver there as a team.
01:17 Laila Bougria
Some other projects, well, they weren't bad, but they weren't great either. I probably have a fainter memory of them. There's always some of those, but then there's also a few projects that still cause me nightmares to this day. The real horror stories, where we had hidden monsters in the code base, or sometimes not even that hidden, right? Sometimes they were there in your face. And to make it a little bit concrete, I'll give you a specific example.
01:49 Laila Bougria
I was working on a project quite a few years ago, and they called me and they said, "This application is running in production. It's actually pretty successful right now, and we want to extend it and add a bunch of features to it. So would you like to work on this?" Cool. I actually knew people on the team, and I was like, awesome. So I started to look at the code base and I saw, okay, when it comes to data retrieval and how we change data and all of that, they were using an ORM, an object relational mapper. In this case, that was NHibernate. Okay, totally fine. Nothing wrong with that. I started to look through the code and I also saw that there were some stored procedures in there because they optimized a few queries. Fine. Then there was a little bit of hard-coded SQL in there, which wasn't great, but there's always something that we can criticize, right? But what was really horrible about this specific project is that there wasn't just a single ORM being used. No. They used two different ORMs. Now, I'm not talking about two different components that participate in a larger system. No, no. One monolithic application, two ORMs.
03:05 Laila Bougria
Now, why was that so horrible? Because basically, in order to query any type of entity, you had to first understand: how is this being managed? Is it NHibernate or is it Entity Framework Core? And which do I then need to use, a session or a data context? And to make things worse, sometimes there were entities that needed to be joined, and some of them were being managed by Entity Framework Core and some of them were being managed by NHibernate. And then it became really messy, because what happened is that those entities were duplicated.
03:38 Laila Bougria
And I think you can imagine from here, I'm already seeing some smiles, that that was not easy to work with. So I wondered, I can't be the only one who has faced problems like these. So I asked on Twitter, and as you can suspect, I got a very nice stream of other horror stories. So feel free to follow the link if you want to have a laugh or a cry or both. And if this triggered some of your own memories, feel free to add to that. It's always nice to have some additional data points. I'll share with you some of the tweets that stood out to me.
04:14 Laila Bougria
The first one here is by James, where apart from the terrible situation he found himself in, what stood out to me is also the solution: they basically had to sell the thing to another company, and other poor people inherited that problem. The question is, did they fix it, or did they just rewrite?
04:39 Laila Bougria
Another tweet here via Maarten, in which he points out that sometimes we build features that our users don't really need, and that's just the truth. Sometimes we build things that our users don't really need. Why? Because we're not listening, or because we're not listening closely enough.
05:00 Laila Bougria
And then another one of my favorites, I forgot about the exception handling. It's fine. It's worked perfectly well for this long. There's nothing that could go wrong now, right? And then your system dies on a Black Friday. Awesome.
05:15 Laila Bougria
But there's another tweet that stood out to me, and it's this one by Scott, where he said, "I hung up a little bit on the term decisions because what I have seen is actually an accumulation of decisions that weren't really made consciously." People weren't really aware that they were making an impactful decision at the time they did something, and then it just got into the code. Maybe it wasn't reviewed, maybe it wasn't thought about sufficiently, and then it starts growing like a weed. And when we finally understand, oh, this is actually a very big problem, it's way too messy, what is our solution? Let's rewrite.
05:55 Laila Bougria
Anyway, that's just one of the common root causes of this terrible aftermath, if you will. It could be unconscious decisions, but sometimes it could also be insufficient evaluation of decisions that were made before. Sometimes we just accept the status quo because we don't want to change it. It's uncomfortable. And then how's that going to impact everything? But sometimes choosing to hold on to what we were doing before can actually make things a lot worse. But it could also be that decisions in your team are being made by an authority. That could be a team lead, it could be a software architect. Now, hopefully that is changing a little bit, but I've seen many environments where these types of decisions are made by someone who's sitting in an ivory tower and then the rest of the team just has to deal with them and is fighting code to do stuff that it was never designed to do.
06:50 Laila Bougria
It could also be CV-driven development. We're at this amazing conference here looking at so many different types of technologies and architectural styles and frameworks, and we're so eager to try it out. So we go back to our project and we're like, let's use this framework. Do we really, really need it for this application that we're building? But it would be so much fun. Or finally copy-driven development, where it's not only about actually copying code, although that's also a concern, but it could also be the copying of behavior, oh, I used this architectural style on this other project and it was super successful, so I'm just going to reuse it again. But what if the context is different? Maybe that's complete overkill, or maybe it's completely insufficient and you actually need a more extensive type of architecture to handle that problem.
07:45 Laila Bougria
Luckily, critical thinking can help us here. Now, the thing is, I'm wondering, who thinks that they're a critical thinker in the audience? I think you're probably just too scared to put your hand up because of what the talk is named, right? Now, I understand why we would all think that we're critical thinkers. Why? Because we are facing complex problems all the time and it's our job to untangle them and to deal with them. But the thing is that there's a lot that goes into critical thinking. I thought I was a critical thinker because I always liked to ask why. I didn't just accept stuff that was handed to me. I was always trying to understand how does this work, and why does it work that way, and so forth. But then I joined Particular Software, and there they actually had a very structured process that forced you to apply critical thinking in a structured manner. And that's when I had to think, okay, wait, what is this critical thinking thing again? Did I get that right?
08:50 Laila Bougria
So it's about the analysis of available facts, evidence, observations, and arguments, applying rational and skeptical thinking so that we can make decisions. And I looked at that definition and I'm like, am I a critical thinker? Maybe. And then a critical thinker is defined as a person who has been trained in its disciplines or actively practices those skills. Have I really been trained in critical thinking? Not that I can remember. So at that point, I had to take a very good look in the mirror and admit that up to that point, I wasn't really a critical thinker at all. But the good news is that we can all become critical thinkers, and there are many building blocks that can help us structure that.
09:45 Laila Bougria
Now, don't get too hung up on all of these yet. I'm going to go through them one by one. And what I'm also going to do, once we look at each individual block, is regularly circle back to that multiple ORM horror story and try to apply those learnings, to make it a little bit more concrete and to understand what that would look like, so that we can deepen our understanding as well.
10:12 Laila Bougria
Let's start with the first part, the problem statement. Now, I find that this is often overlooked, but it's actually a really, really important step, just trying to understand the problem. Now, why is this so hard for us? Because we are hardwired to immediately go into solution mode. We are there to solve problems, and we don't take sufficient time to actually think about the problem. It's basically, as Gilbert Chesterton said, it isn't that they can't see the solution. It's that they cannot see the problem. And the thing is that as engineers, we experience this all the time. What is the most difficult part about fixing a bug? Understanding the context, understanding the problem. What is happening, and in which situations does this occur? And usually, once we can pinpoint that, the solution is straightforward, just a couple of lines of code and we've fixed it. But it's understanding the problem and the context and the situation in which it occurs that takes so much time, time that we often can't even estimate.
11:24 Laila Bougria
So I find it interesting, when you take a step back, that even though we experience that all the time, we don't really apply it to software when we're building features, when we're choosing our architectures and things like that. So what we basically want to do is write down the problem in simple terms. What does that mean? It means that anyone should be able to understand what the problem is, whether they're technical or non-technical, anyone that's part of the product or project that you're working on.
11:57 Laila Bougria
Now, the thing is that when I look at most backlogs that I've come across, there's hardly any of those issues that are raised as a problem. Usually there's a solution proposal, implement this or do that, but actually, by doing that, we are immediately biasing anyone who's reading that issue into that specific solution. So we need to stay away from any solution proposals and rather just focus on documenting the problem itself.
12:26 Laila Bougria
Now, it also cannot be framed in technical terms. Why? This is just a red flag, if you will, that is going to point out that you haven't really identified the problem, because usually the problem is not technical. We use technology to solve problems, but the problems are rarely technical. So if you find yourself using any technical terms, that's a red flag and you should be digging deeper. Now, how do you do that? You could use something like the five whys technique. Who's heard of that before? Okay. So for those of you who haven't heard of it before, the idea is that you just continue digging deeper and keep asking why until you can identify the root problem that you're trying to address.
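To make the five whys concrete, here's a hypothetical chain, with answers made up for the ORM horror story: the application uses two ORMs. Why is that a problem? Because every query requires knowing which mapper owns an entity. Why does that matter? Because changes become slow and error-prone. Why does that matter? Because the team can't deliver features at the pace the business needs. The root problem turns out to be delivery speed and reliability, which is not a technical statement at all, and the solution space is much wider than "pick one ORM."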
13:11 Laila Bougria
And then there's also how we frame it, because we know this, but we constantly underestimate it: words have power. The nuances that we place when we frame a problem, the way that we choose our words and where we put emphasis, that has a very, very big impact, because it opens up the applicable solution space. And a very good example of that is the elevator problem.
13:42 Laila Bougria
Now, imagine that you're a building owner and you're getting feedback from people in the building that the elevator is way too slow, and that's so annoying and there are so many complaints. Now, a valid problem statement would be to say the elevator is too slow, but another equally valid problem statement would be to say that the wait time in the elevator is annoying. Now, that difference may seem irrelevant at first, but it actually has a massive impact. Why? Because it opens up the applicable solution space.
14:17 Laila Bougria
Now, what does that really mean? Let me give you an example. If we present that first problem to a group of people, then they might look at that and say, "We need to change the algorithm of the elevator," or, "You know what? We could just upgrade the motor," or, "We've had it for a while. Maybe we should just replace it altogether." But when you present the same group with that second problem statement, then it might open up completely different solutions, like, for example, "What if we place mirrors in the elevators? That would keep the vain amongst us busy," or, "What if we play music, or hang up, I don't know, an iPad showing the weather or whatever it is?" Now, the thing is that if that means that users stop complaining about the wait time in the elevator, you've addressed the problem, right? Making it faster is not the only way you can solve it.
15:16 Laila Bougria
Now, what I also find is really missing from that first problem statement is the stakeholder's angle, because who's the stakeholder in this scenario? The user that's riding the elevator. And what is their complaint? That the wait time is annoying. Now, in order to get to those findings, we need to start by identifying our stakeholders. Now, stakeholders could be basically anyone who's directly impacted by or involved in your projects. That could be customers, could also be users. They're not necessarily the same group. It could be financial stakeholders. Anyone that's impacted by this. Now, the thing is that identifying them is usually going to be a one-time exercise, because they tend to remain stable over time. They don't change a lot. So once you do it once, you can keep using and referring to them.
16:09 Laila Bougria
It's also important to note that some of your stakeholders may have conflicting interests. This group of people wants this, but that conflicts with another group's needs. It's important to be aware of that so that we can take it into consideration when we're looking into possible solutions to a problem, so that we can balance those conflicting interests.
16:35 Laila Bougria
Now that we know our stakeholders, we need to understand what are their needs, and in order to understand their needs, we need to talk to them. Now, the thing is that this may seem obvious, but walking up to a stakeholder and engaging in a conversation and asking them what is it you really need is not really a good question to ask. Why? Because stakeholders don't always fully understand what they need. Usually they're already using your product, your system, whatever it is, and they're also going to go in solution mode. So it's important that we also adjust the type of questions that we ask them so that we can identify the root need. What are they really trying to do? So we need to ask different questions like for example, what goal are you trying to accomplish? What's the end result that you are basically trying to get to? Or for example, what friction are you experiencing and in what context does this occur? And then we can try to dig deeper and understand the underlying need.
17:42 Laila Bougria
Now, what we want to do in those same conversations is then frame that need for them and validate it with them, "Is this what you meant? Is this actually what you need?", and do that throughout those conversations. And again, we want to write these down and keep them as well. What's also important is that when we write them down, we frame them from the stakeholder's perspective. Why? Because that forces us, using a little bit of empathy, to put ourselves in their place, so that we can understand what they're trying to do and how we can best help them accomplish that.
18:17 Laila Bougria
Now, once we understand who our stakeholders are and what their needs are, we can connect that back to our problem and start asking questions about that problem: to whom is this a problem? Who are the key stakeholders that are affected by this specific problem? We can also ask, for example, which need is being addressed if we solve this particular problem that we're looking at? And then we can also see whether there are any conflicting needs that we need to address, because that's something that we want to keep in mind if we start looking into solutions later. And that will support us in making the right decisions as well.
19:00 Laila Bougria
I know I took a lot of time to talk about just the problem, but that's with good reason. To put it in Einstein's words, if he had an hour to solve a problem, he'd spend 55 minutes thinking about the problem and only five minutes thinking about the solution. And that's also been my experience. Once I started to do this in a more structured manner, I understood way better what the problem was about, without going into solution mode. And yes, this is really, really hard and it takes a lot of practice, but once you start doing it more and more, you can detach those things from each other and you'll get a way deeper understanding.
19:42 Laila Bougria
So again, state that problem in very simple terms. Anyone should be able to understand that, right? Forget about solution proposals, even if you really, really want to do it a specific kind of way. Don't let it hide in there. Consider how wording is also going to affect the applicable problem solution space. And finally, make sure that you also identify and understand your stakeholders, their needs and how they connect to the specific problem. That can also, for example, help you prioritize, which are the most important problems that I should be working on, and which problems I shouldn't be working on at all, even though that's not always something that we want to hear.
20:27 Laila Bougria
So if I go back to our horror story, multiple ORMs, single system, and I would try to apply this, then it really proves my point, because in that specific scenario, there wasn't really a problem to solve, to be honest. That was a clear case of CV-driven development, where there were people on the team back then that really wanted to learn Entity Framework Core, had the best intentions in the world to actually replace the whole thing, but then reality kicks in and there are deadlines and things like that, and you're just left with a mess that starts to grow, and people add more duplication of entities.
21:03 Laila Bougria
But for the sake of this example, let's just assume that there was an actual problem to solve. Let's say that there was a change in the organization's compliance policy, and they said, "From now on, we can only use dependencies that offer some kind of formal support." So we could frame the problem as follows: our application is using dependencies that don't offer any formal support, and therefore, it's not compliant with our organizational compliance policy. If we then connect that back to the stakeholder and the need, we can link it to the organization, which is also a stakeholder. And in this case, the organization, in the context of building software systems, needs to exclusively select dependencies from vendors that offer formal support contracts.
21:59 Laila Bougria
Now, once we have a way better understanding of the problem and how that connects to the needs, we can finally get into solution mode, and we'll do that by evaluating possible solutions. When we do so, it's important that we think outside the box and that we try to think of options that may even be counterintuitive in the beginning, but just keep an open mind. Whenever someone said to me, think outside the box, I was like, "Yeah, that's easily said. What do you actually mean?" So one of the things that I've actually found is the best way to force yourself to think outside the box is to not do it alone because by basically having a more diverse team, we can do that together and we can pull each other out of the box.
22:47 Laila Bougria
Now, there are multiple studies that show that having a diverse team actually improves the output of those teams. But in this case, I'm not even specifically talking about diversity in terms of gender or race, although those are really important as well. I'm mostly referring, in this context, to diversity in professional experience, in background, in education, in the work that people do on a daily basis. That's the kind of diversity that we're looking for in decision-making.
23:17 Laila Bougria
The thing is that if you present a specific problem to a group of engineers, what are you going to get? Engineering-specific solutions. But what if you pull in a product owner, a tester, a graphic designer, a business expert? The results might actually surprise you. And that's, again, backed up by that elevator problem that we were talking about before, because if we circle back to that, the ideas of upgrading the motor and adjusting the algorithm, guess who those came from? Engineers working on the elevator. Mirrors? That came from the building managers. And again, that emphasizes how powerful that diversity can be to the solutions that we end up coming up with.
24:04 Laila Bougria
Now, of course, it's also important to balance this, right? You can't, for every decision that you make, put 30 people in a room. I understand that, but you can pull in a few people just to force everyone to think outside the box a little bit. What's really important when you do that is to welcome different and opposing opinions. I know that a lot of environments still have a lot of gatekeeping, "You don't know anything about that," but that's not going to work if you want to make effective decisions, because you want their opinion, and they're going to come up with things that you never even considered were a possibility.
24:45 Laila Bougria
Now, let's go back to our horror story. Now, again, if we would ask just our engineers to solve this problem, they might come up with options like, "Let's migrate to Entity Framework Core because that's supported by Microsoft," or, "What if we build up knowledge of the NHibernate codebase? That could work as well." But if you would, for example, pull in someone who has any type of financial decision-making power, they might say, "Wait a second, this NHibernate codebase, there's people working on that, right? Couldn't we ask them whether they want to engage in some kind of a support agreement with us?" That would also work. If you then pull in people that have an understanding about compliance policies, they might say, "Well, wait a second, this is just an internal application. This is not mission-critical. We can exclude those from our compliance policy. You can just leave things as they are."
25:43 Laila Bougria
Now, once we have different alternatives, we also need to look into each one and understand whether they are viable alternatives to begin with. And that starts by gathering facts and data, because to ensure that we're not just making emotional decisions, we need to be able to back up why we say something would be a good idea. Now, the thing is that when we do that and we're looking at different options, each and every one of us will feel a strong pull in a specific direction. So it's important to be aware of that and avoid confirmation bias, because we are all very, very good at using Google and very, very creative at finding stuff that will support our personal opinion. So it's really important to go against that and to be specifically critical of the options that you tend to favor. Ask yourself why, and try to find opposing evidence to your statement, opposing opinions, right? Be your own devil's advocate, and if you find that you're unable to do that, find a rubber duck. Ask someone else to do that for you.
26:56 Laila Bougria
Another thing that is really important is to ensure that the data you find is relevant to your context, because the thing is that in this industry, we follow innovators and thought leaders, and we follow so many incredibly talented people, and we see what type of systems they're building and how they're building them. And that's incredibly valuable, but it has also led to situations where decisions are made like, we're going to use microservices because Netflix is super, super successful at doing that. And we've seen plenty of talks, including at this conference, about how problematic that can turn out to be. Or, we're going to go full serverless like Amazon Prime, and then we're building this system for 12 months, and then they come out with another blog post and say, "We went back to a monolithic approach because we saved ourselves a lot of money." You're like, whoops.
27:58 Laila Bougria
Now, these articles are super valuable. I'm not trying to diminish them. They're really sources that we can learn from. But it's important, when you're gathering data and facts to support a specific option that you're proposing, that they're relevant. And the reality of the situation is that you're probably not Amazon or Netflix or Google, although we sometimes may hate to admit it. So we always need to ask ourselves, is this relevant to the situation? Is it the same type of application? Is it the same line of business? Do we have the same scaling concerns, the same amount of users, and things like that? And then evaluate whether that resource is relevant to the context that you find yourself in.
28:48 Laila Bougria
Now, another thing that I feel we are commonly guilty of, and I've been at fault multiple times as well, trying to do the best thing, is designing a system for the over-the-rainbow scale that we expect. We're looking to cater for one million users. And how many users do you have today? 100. And that's where I found this quote by Werner Vogels really valuable, where he basically says: with every order of magnitude of growth, revisit your architecture. You have 100 users? Verify whether your system can support the next order of magnitude, 1,000. When you hit 1,000, check if you can support 10,000, and adjust your architecture as you go. That also allows you a lot more flexibility in the beginning, when you probably don't fully understand what you're building just yet. And that also helps us make sure that we're making the right decisions, and that when we look at data, we're comparing apples to apples and not apples to watermelons.
29:58 Laila Bougria
Again, circling back to the multiple ORMs horror story, I found a statement like this: "Moving to Entity Framework Core from NHibernate is going to increase the application speed," because according to this article, and I made this up, by the way, Entity Framework Core is so much faster. It's already a very good step forward that at this point there's a reference to an article, right? We've gathered data, we've gathered facts. What is the biggest problem with this? It's completely irrelevant to the problem we're trying to solve. It's just completely disconnected. And I would even say that this is basically an assumption. But what exactly is an assumption? An assumption is a statement that we believe to be true and then build upon. The thing is, they're shortcuts, because they allow us to pick up some speed when we're speaking or discussing something. But if those assumptions don't hold up, then everything falls apart, and our decision-making is by definition faulty, right?
31:08 Laila Bougria
One of the things that we want to do, and I'm not saying that you shouldn't make any assumptions, is to be aware of them, write them down, and then take some time to go and validate them. If you write them down as you go through this process, you can then go back to them as a checklist and validate them, make sure that there weren't any holes in your decision-making, and act like a scientist. Why am I saying a scientist? Because they're the perfect example, in our world, of people that have the ability to question their deepest beliefs all the time.
31:44 Laila Bougria
So whenever you find yourself in a meeting and everyone is nodding, "Yeah, that's absolutely right. You're right, Laila," ask yourself, wait a second, why am I agreeing? Why do I think that this is true? You have to constantly be aware of that. What data points do we have that support this statement? And write those down. That can also be very helpful when you revisit decisions later on, to check whether we missed any assumptions at the time. What was our thinking based on? What has changed in the meantime?
32:20 Laila Bougria
So to circle back to that statement from our multiple ORMs horror story, we can now more clearly see that this is an assumption. Why? Because we're referring to an article where probably someone built a console app and did some benchmarking tests, and that showed it being faster. But what is faster on a per-request basis? Is it nanoseconds, milliseconds, seconds? And what if we actually apply that to our application? The context is completely different. We're comparing a console app with an actual living system. If we would bring that in without validating first, it could even be that it becomes slower instead of faster, because the variables are so different. Also, is this the problem you're trying to solve? No. Because if performance were actually a main concern, have you checked if you have any N+1 queries first? Did you check whether you have any missing indexes? Maybe you want to go to ADO.NET if you really, really want to go for speed. But again, you had 100 users, so how relevant is this again? Not.
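For anyone who hasn't run into the N+1 query problem mentioned here, a minimal sketch follows. It uses Python's built-in sqlite3 module purely for illustration; the schema and data are made up, and the same shape appears in any ORM when lazy loading fires one query per parent row:

```python
import sqlite3

# Hypothetical schema for illustration: orders and their line items.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY);
    CREATE TABLE order_lines (
        id INTEGER PRIMARY KEY,
        order_id INTEGER REFERENCES orders(id),
        product TEXT);
    INSERT INTO orders (id) VALUES (1), (2), (3);
    INSERT INTO order_lines (order_id, product)
    VALUES (1, 'book'), (1, 'pen'), (2, 'lamp'), (3, 'mug');
""")

# The N+1 shape: 1 query for the orders, then N more, one per order.
orders = conn.execute("SELECT id FROM orders").fetchall()
for (order_id,) in orders:
    lines = conn.execute(
        "SELECT product FROM order_lines WHERE order_id = ?",
        (order_id,),
    ).fetchall()  # an extra round trip for every single order

# The fix: one JOIN fetches the same data in a single round trip.
rows = conn.execute("""
    SELECT o.id, l.product
    FROM orders o
    JOIN order_lines l ON l.order_id = o.id
""").fetchall()
```

With 100 orders, the loop issues 101 queries where the JOIN issues one, which is usually a far cheaper performance fix than swapping ORMs.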
33:30 Laila Bougria
And that brings me to the risks, because in our decision-making there are always risks. But one risk that is always there is that one of our assumptions turns out to be false, because that points out that we had a flaw in our thinking. Now, we also want to take some time to specifically think about which risks are associated with the decision that we're trying to make. And these could be of very different kinds. They could be, for example, of the technical kind, like code complexity, or introducing technical debt, or something like that. But they could also be of the operational kind: oh, this is going to cause downtime, or now we're not going to be able to adhere to these SLAs, and things like that. Or they could be of the procedural kind, like following GDPR guidelines. Or they could be of the financial, reputational, and human kind. They could, for example, in many ways affect staff. In many projects, I've seen that we adopted architectural styles and frameworks with a team that didn't have the capabilities to work in those kinds of systems, and we didn't support them with training or anything. It was like, yeah, you'll just learn this. And then people were struggling, and messes started to happen. So that's also a really important factor.
34:53 Laila Bougria
Now, the theory also says that we have to quantify risks, but how do you quantify risks in software? It's terribly difficult, right? It's not like we're in the financial space, where you can just do number crunching. In software, we would really just be guessing. And there are multiple techniques out there that can help. There's SWOT analysis. There's also the Delphi method and things like that. And I looked at multiple methods, and some of them are useful in some cases, but I couldn't quite find the right one.
35:27 Laila Bougria
Then I came across the risk assessment matrix. And what is that exactly? What we're basically going to do is we're going to think of a risk and ask ourselves how probable is it that this might actually occur? And if it does occur, what is going to be the impact? How severe is that going to be? And we're just going to say that this goes from insignificant to high, so we're just putting it in a range, and then we can plot it against that risk assessment matrix. And based on the colors, we'll get a sense of how low or how high this risk actually is.
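As a rough sketch of the mechanics, here's what plotting a risk on such a matrix could look like in code. The four-step scales, the multiplication, and the color thresholds are my own assumptions for illustration; real matrices vary in size and banding:

```python
# Hypothetical 4x4 risk assessment matrix: probability x severity.
PROBABILITY = {"rare": 1, "unlikely": 2, "likely": 3, "almost certain": 4}
SEVERITY = {"insignificant": 1, "minor": 2, "major": 3, "high": 4}

def risk_level(probability: str, severity: str) -> str:
    """Map a (probability, severity) pair to a color band on the matrix."""
    score = PROBABILITY[probability] * SEVERITY[severity]
    if score <= 3:
        return "low (green)"
    if score <= 8:
        return "medium (yellow)"
    return "high (red)"

# e.g. scoring the "incremental EF Core migration causes bugs" risk:
print(risk_level("likely", "major"))  # -> high (red)
```

With a table like that, each risk for an option can be plotted, and mitigation effort goes to whatever lands in red first.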
36:05 Laila Bougria
Now, again, circling back to our horror story, if we would start to think about risks when evaluating the option of moving to Entity Framework Core, then we could say: doing this in an incremental way is going to cause a lot of code complexity, and that might cause bugs and things like that. But also, migrating a fully functioning codebase that is currently running in production is going to take time. It's therefore going to cost money, but it might also require feature freezes, because we're going to want to isolate some of those changes so we don't go in blind when we move to production. And finally, this hasn't really been spiked or validated, so there's an incredible amount of unknown unknowns. Maybe this is not a viable alternative at all, and that is going to be your biggest risk, because you just don't really know what you're getting yourself into. An alternative like this requires a deeper analysis.
37:08 Laila Bougria
Now, we've been talking about evaluating possible solutions, and I've also said that doing this in a collaborative manner is really, really useful. But one of the things that tends to happen, and that we always need to mindfully try to avoid, is that we immediately start to compare, because I proposed this option and my colleague proposed that option, and we immediately start discussing, but this is better, but that is better. What I find really important is that we evaluate those alternatives individually, completely in isolation, without comparing them to another alternative. What are the assumptions for this option? What are the risks for this option? What are the supporting facts for this specific option? And once we have a very clear overview of that, then we can engage in that trade-off analysis and start to compare. But we'll see that some options fall away on their own; just by looking into them in isolation, we realize they're not really viable. So there's no need to engage in that comparison from the beginning, because we're just making things a lot more complex for no reason.
38:18 Laila Bougria
And then finally, we're ready to propose a decision, because now we've understood the problem. We've also identified the stakeholders that are affected and which needs we're trying to address. We've also considered multiple alternatives, each individually with their risks, their assumptions, and their supporting facts. And based on those findings, we can propose a decision that we've looked at skeptically, that we've thought about, that is as unbiased as possible. But we're not done just yet, because what we also want to do is go back to the risks of that specific alternative that we're putting forward as the way to go, and ask ourselves, what are we going to do about those risks? It's not sufficient to just think about the risks. We need to ask ourselves, do we accept these?
39:12 Laila Bougria
Now, accepting is not the same thing as ignoring, right? We can choose to accept certain risks, and then what we want to do, keep track of them. That's the least amount of friction, just be aware this is a risk, okay, then we're just going to gather data points so that we can at least realize that maybe at this point the risk has become reality, and then we can deal with that later. That's usually something you'll want to do for the lower risks, but if the risks are higher, you're not going to just be as willing to just accept them. So we're going to have to think about, okay, how are we going to manage them? How are we going to even mitigate them? Is there a way that we can completely avoid them altogether? This is also a very, very important part of a structured type of decision. We can't just ignore that.
40:04 Laila Bougria
Now, I've had a lot of conversations about this type of decision-making process with a lot of people, and it tends to come up, "Oh, but that actually reminds me of an ADR, an architecture decision record." Who's heard of that before? Okay. Who uses them consistently? Okay, less hands. Okay. Well, the thing is that architecture decision records basically document a specific software design choice that is architecturally relevant to your system. And I would say that, yes, this process that we apply is a kind of ADR, although it's a very elaborate one. It's not the simplest structure that you'll come across, because in its simplest form, an ADR will capture the current status, the context that we find ourselves in, the final decision, and then the consequences or the risks.
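To make that simplest form concrete, a minimal ADR could look something like this; the number, title, and wording are invented here for the ORM example, and real templates vary:

```
ADR 007: Use a single ORM (Entity Framework Core)

Status: Proposed
Context: The application uses two ORMs (NHibernate and Entity Framework
  Core), so entities are duplicated and every query starts with the
  question "which mapper owns this entity?".
Decision: All new data access goes through Entity Framework Core; the
  remaining NHibernate mappings are migrated incrementally.
Consequences: Migration takes time and may require feature freezes;
  code complexity temporarily increases during the transition.
```

The process described in this talk layers alternatives, assumptions, explicit risks, and feedback on top of that skeleton.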
40:56 Laila Bougria
But there are a lot of different templates that you can look into. So I look at this as a sort of more sophisticated template, if you will. But there's still one very big difference when it comes to the process that we apply to decision-making versus an ADR, and that's feedback, because I haven't really talked about it up until this point, but we call this decision-making process an RFC, a request for comments. Anyone who uses those? Okay, a few hands.
41:28 Laila Bougria
Now, the reality for us is that feedback is really the backbone of this decision-making process. And what happens is that when an RFC is created, it's published throughout the entire organization. Now, we're around 40 people. If you're a lot larger than that, then maybe it makes more sense to do it within a team or department or things like that. But the point is that they remain open for feedback for a specific period of time, and anyone can basically go in and read it: what's the problem, what are you looking at, which alternatives did you consider, and follow you through that decision-making. And anyone is able to read it and also provide feedback. That gives you a very, very open culture and brings in that diversity again. It's also really encouraged. We look at RFCs and we find them successful when there's a lot of feedback. Those are the ones where we actually think, this was awesome.
42:28 Laila Bougria
Now, the thing is that when you open up something for everyone to be able to comment on, what happens? We disagree, and that's fine. Sometimes we disagree. That's totally okay. And that's where we have to remember, what is the goal of this feedback? And it's not to get alignment on the decision that we're taking. We're just trying to validate the decision-making, right? Did we follow all of the right decision-making guidelines? Did we apply critical thinking?
43:01 Laila Bougria
Now, the thing is, we're trying to think: are we solving a valid problem? What need are we trying to address? Did we consider all the right alternatives? Are there any assumptions that are missing? Are there any assumptions that are wrong? And are there any risks that we missed? So what we are basically doing is, based on that feedback, going back and reevaluating whether we followed the right decision-making practices. And that's how we stay friends, because by structuring the feedback as well, we can apply critical thinking on the reviewing side too.
43:40 Laila Bougria
So what we then want to do is, we'll read an RFC, for example, and I'm like, "What? What are you trying to do? I completely disagree with this." But then my job at that point is to translate that gut-based "no" reaction, "I disagree. I don't want this to happen," into, "Are there any missing alternatives? Did they miss any assumptions? Is there any assumption that is false? Are there any missing risks?" And by structuring it that way, first of all, we stop saying, "I don't like what you're doing," and we're making it a constructive type of conversation as well. And even when we disagree, if you can't poke any holes in the decision-making that led to this specific decision being proposed, it's a lot easier to accept it, because you don't have any valid points to fight back with at that point. Right?
44:40 Laila Bougria
Now, it's also important for me to stress that there has to be some balance, because in case that wasn't clear up to this point, this is not a lightweight process. It takes a lot of time to gather all of these facts, to think about this in many ways, to have multiple people involved in that process. So this is just not something that you can apply to every decision that you make in an organization, because you would just stop moving. It's important therefore to differentiate between the smaller decisions and the larger decisions.
45:14 Laila Bougria
What is a small decision then? Well, one of the ways that I like to think about it is to ask myself, how easily can we reverse this if we would just move forward and go with the first option that we think of? How easy is it to reverse tomorrow, in a week, in a month, in a year? And based on that, we can start to see how impactful that decision may be. And basically, the higher the impact, the higher the return on investment of going through a structured process.
45:48 Laila Bougria
So what if you're thinking, "This is all awesome, Laila. I love this, but you know what? I'm not the decision-maker at my company and I can't just walk in tomorrow and introduce a process like this. So thanks, but no thanks." Well, I totally get that, but I will still challenge you, because there are things that you can take away and start to implement slowly, and I'm going to help you through that. It's important that you realize that this process I talked about, we also didn't introduce it from one day to another. It has been iterated on and evolved over years and years of retrospections and thinking about what works and what doesn't, right? That's what got us here.
46:32 Laila Bougria
So where do you start? Well, you start by looking out for hidden decisions. Who engages in code reviews? Okay, that's most hands. That's a very good place to start. Did someone introduce a new framework, a new architectural style, a significant design decision that maybe requires some discussion? That's a very good place to have that discussion. Meeting notes. I know not everyone has structured meeting notes, but if you do, they usually contain decisions. Another thing is any type of communication, email, Slack; usually when people agree on something, those are decisions, right? And also during meetings, if you find that a lot of people are nodding and agreeing, then those are probably also decisions that you want to capture.
47:26 Laila Bougria
Now, what do you then do with those? You just write them down. That's the lowest point of friction. Just write them down somewhere. And it doesn't have to take a lot of effort to do this. It could just be a single paragraph to start with. Start low friction. And then, writing things down, it's basically already going to lead to some reflection, to some reevaluation. Why are we doing this again? Wait, it made sense in the meeting, but now I'm not sure again.
47:59 Laila Bougria
Also, make them easy to find. How many times have we spent who knows how much time trying to reconvince people of the same thing we already agreed upon weeks ago, just because nobody took the time to write it down? And then you can slowly evolve to the simplest form of architecture decision records and work your way up. Start incorporating those building blocks that we've seen throughout the session, bit by bit. It's basically like planting a seed.
48:31 Laila Bougria
Who's read the Atomic Habits book? Well, James Clear basically says in the book that if you make a small 1% improvement every day, over a year that compounds to being roughly 37 times better. So that's quite impactful. Now, I know that it may be frustrating to have to go through this slowly, and you have to be patient, but you have to focus on the compounding effect. What is that effect going to be over a year? And also, don't forget that with each small improvement you make, you're already going to have a very big impact on the decisions that are being made.
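As a quick sanity check on that compounding arithmetic (the 1% per day and the 365 days are the book's framing):

```python
# 1% better every day for a year: growth multiplies, it doesn't add.
print(1.01 ** 365)   # ~37.78, i.e. roughly 37x better after a year
print(0.99 ** 365)   # ~0.03, the book's flip side: 1% worse every day
```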
49:13 Laila Bougria
What's also important is that when it's your turn to make a decision, you lead by example. Capture your own decision-making, incorporate those building blocks, and then present it to others. Ask for feedback, ask them for their opinion as well. And then, when you see others making decisions, try to also structure your own feedback, not by saying, "I don't like this. I disagree with this," but rather by pointing out unclear problem statements. What problem are you really trying to solve here? What other alternatives did you maybe not consider? Are there any missing assumptions and risks? And then, finally, when you find people fighting over options, help them structure their feedback as well, to get it into a constructive type of conversation.
50:03 Laila Bougria
And you have to be patient and you have to speak up. Now, I know that I'm saying this and it's easily said, and I also understand that I'm a bit privileged in that regard, first because speaking up doesn't come hard to me, and second, because I'm in an organization that really supports it and actually wants me to do that. I know it's not easy to do that everywhere, but it's important that we understand that saying something once is not going to lead to durable change. We're going to have to consistently point these things out until people start seeing the pattern as well. And of course, we also have to frame that feedback positively and constructively, so you're not perceived as just that annoying team member.
50:48 Laila Bougria
Another thing to do is to use collective pronouns. "Let me just build upon what you just said. What if we would consider this option?" By using those collective pronouns, we can boost collaboration. I'm not making it a you problem, "you didn't consider this," but rather, "can we maybe look at this option together?" And when you see that people start to incorporate some of these things, celebrate that. That's positive change. You want to encourage that further. And be patient. You have to be persistent and give it time.
51:24 Laila Bougria
Okay, that brings me to the end of the talk. I hope that throughout this last hour, I've convinced you that critical thinking is not something that you're born with. It's a skill that you have to work on and you have to train. It's like muscle memory, and it gets easier with time, but it takes practice. So it definitely is something that you will have to keep doing. Clearly defining the problem is definitely where you should be starting. Make sure that you're solving the right problems. Otherwise, what are you really doing with your time? Right? Structure your decisions around multiple alternatives. Never look at just one option. There are always multiple ways in which we can solve a single problem. That's the beauty of our work, at least I find that a super nice way to look at it. And think in terms of risks and assumptions and supporting data points as well. And start small, but always write things down. And also structure your feedback, speak up, lead by example, and help others structure their feedback.
52:28 Laila Bougria
Thanks a lot for listening. If you scan the QR code, it will take you to one of my GitHub repositories where I have a bunch of backing articles. If you want to read a little bit more, if you have any questions, I'm happy to hear them. I know it's the end of the day. I'm still here tomorrow. Feel free to stop me in the hall or connect online. Thanks a lot for listening.